No. 1957: Testing Multiple Forecasters
Authors
Abstract
We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment. This test checks whether each expert is calibrated conditional on the predictions made by other experts. We show that this test is good in the sense that a true expert—one informed of the true distribution of the process—is guaranteed to pass the test no matter what the other potential experts do, and false experts will fail the test on all but a small (category one) set of true distributions. Furthermore, even when there is no true expert present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, but at the cost of failing some true experts. In contrast, tests that allow false experts to make precise predictions can be jointly manipulated.

∗ We wish to thank Nabil Al-Najjar, Brendan Beare, Dean Foster, Sergiu Hart, Stephen Morris, Wojciech Olszewski, Alvaro Sandroni, Jakub Steiner, and Jonathan Weinstein for helpful comments and suggestions.
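The idea of checking calibration conditional on the other experts' predictions can be sketched concretely. The following is a minimal illustration, not the paper's formal test: it assumes binary outcomes, discretized predictions, and measures, for each expert, the worst gap between predicted and empirical frequencies within each observed profile of joint predictions (the function name and tolerance are hypothetical).

```python
import numpy as np

def cross_calibration_gaps(predictions, outcomes):
    """Illustrative cross-calibration check.

    predictions: array of shape (T, K) -- expert k's (discretized)
        predicted probability of outcome 1 at each period t.
    outcomes: array of shape (T,) of realized 0/1 outcomes.
    Returns, per expert, the worst absolute gap between the expert's
    prediction and the empirical frequency of the outcome, conditioning
    on the full profile of all K experts' predictions.
    """
    T, K = predictions.shape
    profiles = [tuple(row) for row in predictions]
    gaps = np.zeros(K)
    for k in range(K):
        worst = 0.0
        for p in set(profiles):
            idx = [t for t in range(T) if profiles[t] == p]
            emp = np.mean([outcomes[t] for t in idx])
            worst = max(worst, abs(p[k] - emp))
        gaps[k] = worst
    # an expert "passes" if the gap is small on profiles seen often enough
    return gaps
```

For instance, if nature produces outcome 1 seventy percent of the time, an expert who always announces 0.7 has a near-zero gap, while one who announces 0.2 has a gap near 0.5 and would fail; this is only a finite-sample caricature of the asymptotic test in the paper.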
Similar works
Efficient Testing of Forecasts
Each day a weather forecaster predicts a probability of each type of weather for the next day. After n days, all the predicted probabilities and the real weather data are sent to a test which decides whether to accept the forecaster as possessing predictive power. Consider tests such that forecasters who know the distribution of nature are passed with high probability. Sandroni shows that any s...
Incentive-Compatible Forecasting Competitions
We consider the design of forecasting competitions in which multiple forecasters make predictions about one or more independent events and compete for a single prize. We have two objectives: (1) to award the prize to the most accurate forecaster, and (2) to incentivize forecasters to report truthfully, so that forecasts are informative and forecasters need not spend any cognitive effort strateg...
Testing for multiple-period predictability between serially dependent time series
This paper reports the results of a simulation study that considers the finite-sample performances of a range of approaches for testing multiple-period predictability between two potentially serially correlated time series. In many empirically relevant situations, but not all, most of the test statistics considered are significantly oversized. In contrast, both an analytical approach proposed i...
Testing the Value of Probability Forecasts for Calibrated Combining.
We combine the probability forecasts of a real GDP decline from the U.S. Survey of Professional Forecasters, after trimming the forecasts that do not have "value", as measured by the Kuiper Skill Score and in the sense of Merton (1981). For this purpose, we use a simple test to evaluate the probability forecasts. The proposed test does not require the probabilities to be converted to binary for...
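The Kuiper Skill Score mentioned above (also known as the Hanssen–Kuipers discriminant or true skill statistic) is conventionally computed from a 2×2 contingency table as the hit rate minus the false-alarm rate; note the paper cited here proposes a test that avoids binarizing the probabilities, so the sketch below, which binarizes at an assumed threshold, shows only the standard score itself.

```python
def kuiper_skill_score(forecasts, outcomes, threshold=0.5):
    """Kuiper Skill Score for probability forecasts of a binary event,
    binarized at `threshold` (the threshold is an illustrative choice).

    With a = hits, b = false alarms, c = misses, d = correct rejections:
        KSS = a/(a+c) - b/(b+d)
    KSS > 0 indicates skill; KSS = 1 is a perfect forecaster.
    """
    a = b = c = d = 0
    for f, y in zip(forecasts, outcomes):
        warn = f >= threshold
        if warn and y:
            a += 1
        elif warn and not y:
            b += 1
        elif not warn and y:
            c += 1
        else:
            d += 1
    hit_rate = a / (a + c) if (a + c) else 0.0
    false_alarm_rate = b / (b + d) if (b + d) else 0.0
    return hit_rate - false_alarm_rate
```

A forecaster who always issues the same warning gets a score of zero (hit rate and false-alarm rate both equal one), which is why the score is used to screen out forecasts with no "value" before combining.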